Ellicott City


Design and Evaluation of a Compliant Quasi Direct Drive End-effector for Safe Robotic Ultrasound Imaging

Chen, Danyi, Prakash, Ravi, Chen, Zacharias, Dias, Sarah, Wang, Vincent, Bridgeman, Leila, Oca, Siobhan

arXiv.org Artificial Intelligence

Robot-assisted ultrasound scanning promises to advance autonomous and accessible medical imaging. However, ensuring patient safety and compliant human-robot interaction (HRI) during probe contact poses a significant challenge. Most existing systems either have high mechanical stiffness or are compliant but lack sufficient force and precision. This paper presents a novel single-degree-of-freedom end-effector for safe and accurate robotic ultrasound imaging, using a quasi-direct drive actuator to achieve both passive mechanical compliance and precise active force regulation, even during motion. The end-effector demonstrates an effective force control bandwidth of 100 Hz and can apply forces ranging from 2.5N to 15N. To validate the end-effector's performance, we developed a novel ex vivo actuating platform, enabling compliance testing of the end-effector on simulated abdominal breathing and sudden patient movements. Experiments demonstrate that the end-effector can maintain consistent probe contact during simulated respiratory motion at 2.5N, 5N, 10N, and 15N, with an average force tracking RMS error of 0.83N, compared to 4.70N on a UR3e robot arm using conventional force control. This system represents the first compliant ultrasound end-effector tested on a tissue platform simulating dynamic movement. The proposed solution offers a novel approach for designing and evaluating compliant robotic ultrasound systems, paving the way for safer, more patient-friendly robotic ultrasound in clinical settings.
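The headline comparison (0.83N vs. 4.70N) is a root-mean-square error between the commanded and measured contact force over a scan. As a minimal sketch of how that metric is computed, assuming uniformly sampled force traces (the numbers below are illustrative, not data from the paper):

```python
import math

def force_rms_error(commanded, measured):
    """RMS error (in newtons) between commanded and measured force traces."""
    if len(commanded) != len(measured):
        raise ValueError("force traces must be the same length")
    return math.sqrt(
        sum((c - m) ** 2 for c, m in zip(commanded, measured)) / len(commanded)
    )

# Illustrative example: a constant 5 N setpoint vs. a noisy measured force.
setpoint = [5.0] * 5
measured = [5.2, 4.9, 5.1, 4.8, 5.0]
print(round(force_rms_error(setpoint, measured), 3))  # small error = good tracking
```

A lower RMS error means the controller held the probe's contact force closer to the setpoint throughout the motion, which is what the paper's compliant end-effector improves over conventional arm-level force control.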


High-school students are making strides in cancer research: 'Gives me hope'

FOX News

The future of cancer research is in good hands. Six high-school students in the U.S. are dedicated to making progress toward improving the diagnostics and treatment of the disease. The students were finalists in this year's Regeneron Science Talent Search, which is the country's oldest and most prestigious science and mathematics competition hosted by the Society for Science in Washington, D.C. "We are thrilled to honor these bright minds dedicated to making strides in cancer research," said Maya Ajmera, president and CEO of the Society for Science, a partner with Regeneron in the Science Talent Search. "These high-school students are not only advancing our understanding of the way cancer presents in the human body, but are paving the way for potential future therapies and helping unlock new possibilities in the fight against this formidable disease." Four of the six student finalists who specialized in cancer research are shown here.


Customizable Avatars with Dynamic Facial Action Coded Expressions (CADyFACE) for Improved User Engagement

Witherow, Megan A., Butler, Crystal, Shields, Winston J., Ilgin, Furkan, Diawara, Norou, Keener, Janice, Harrington, John W., Iftekharuddin, Khan M.

arXiv.org Artificial Intelligence

Customizable 3D avatar-based facial expression stimuli may improve user engagement in behavioral biomarker discovery and therapeutic intervention for autism, Alzheimer's disease, facial palsy, and more. However, there is a lack of customizable avatar-based stimuli with Facial Action Coding System (FACS) action unit (AU) labels. Therefore, this study focuses on (1) FACS-labeled, customizable avatar-based expression stimuli for maintaining subjects' engagement, (2) learning-based measurements that quantify subjects' facial responses to such stimuli, and (3) validation of constructs represented by stimulus-measurement pairs. We propose Customizable Avatars with Dynamic Facial Action Coded Expressions (CADyFACE) labeled with AUs by a certified FACS expert. To measure subjects' AUs in response to CADyFACE, we propose a novel Beta-guided Correlation and Multi-task Expression learning neural network (BeCoME-Net) for multi-label AU detection. The beta-guided correlation loss encourages feature correlation with AUs while discouraging correlation with subject identities for improved generalization. We train BeCoME-Net for unilateral and bilateral AU detection and compare with state-of-the-art approaches. To assess construct validity of CADyFACE and BeCoME-Net, twenty healthy adult volunteers complete expression recognition and mimicry tasks in an online feasibility study while webcam-based eye-tracking and video are collected. We test validity of multiple constructs, including face preference during recognition and AUs during mimicry.


Coping with low data availability for social media crisis message categorisation

Wang, Congcong

arXiv.org Artificial Intelligence

During crisis situations, social media allows people to quickly share information, including messages requesting help. This can be valuable to emergency responders, who need to categorise and prioritise these messages based on the type of assistance being requested. However, the high volume of messages makes it difficult to filter and prioritise them without the use of computational techniques. Fully supervised filtering techniques for crisis message categorisation typically require a large amount of annotated training data, but this can be difficult to obtain during an ongoing crisis and is expensive in terms of time and labour to create. This thesis focuses on addressing the challenge of low data availability when categorising crisis messages for emergency response. It first presents domain adaptation as a solution for this problem, which involves learning a categorisation model from annotated data from past crisis events (source domain) and adapting it to categorise messages from an ongoing crisis event (target domain). In many-to-many adaptation, where the model is trained on multiple past events and adapted to multiple ongoing events, a multi-task learning approach is proposed using pre-trained language models. This approach outperforms baselines and an ensemble approach further improves performance...


AWS CodeGuru uses machine learning to improve code quality

#artificialintelligence

AWS has made its CodeGuru tool generally available for developers. The tool, initially released in preview at the AWS re:Invent conference last December, uses machine learning to make recommendations on how developers can improve the quality of their code, as well as identify an application's most expensive lines of code. "CodeGuru helps you improve your application code and reduce compute and infrastructure costs with an automated code reviewer and application profiler that provide intelligent recommendations," said Danilo Poccia, chief evangelist for the EMEA region at AWS, in a blog post. "Using visualizations based on runtime data, you can quickly find the most expensive lines of code of your applications. With CodeGuru, you pay only for what you use."


Digital Harmonic to Bring its Powerful AI-Driven Image and Video Enhancing Solution to the Federal Market

#artificialintelligence

Turning night into day, clearing up fog, and removing cloud cover in image and video sources in real time, Digital Harmonic's PurePixel solution will help the Federal mission secure critical areas and communities, enabling the military to make better-informed command and control decisions. Digital Harmonic LLC, a leading image, video, and signal processing technology company, announced today that it has chosen to collaborate with Dell Technologies OEM Embedded & Edge Solutions to bring PurePixel, designed on a scalable suite of hardware solutions from edge devices to rack-mounted servers, to federal agencies. PurePixel is an upstream pre-processing component that enhances the quality and efficiency of machine learning (ML) and computer vision (CV) algorithms, increasing the success of the output for real-time video. PurePixel also improves advanced analytics software with cloud-based ML algorithms for object recognition, object detection, image annotation/labeling, semantic image segmentation analysis, and edge computing capabilities. One of the options for delivering PurePixel to customers is the Dell EMC PowerEdge C4140 server, an ultra-dense, accelerator-optimized rack server system purpose-built for artificial intelligence (AI) solutions with a leading GPU-accelerated infrastructure.


Robot kayaks found the basin of an Alaskan glacier is melting 100 TIMES faster than models showed

Daily Mail - Science & tech

Seaborne robots have made a startling discovery beneath a 20-mile glacier in Alaska. The technology found that the massive river of ice may be melting underneath the LeConte Glacier much faster than previously thought. Scientists programmed autonomous kayaks to swim near the icy cliffs of the glacier to measure the 'ambient meltwater intrusions', which show how much fresh water is flowing into the ocean from underneath the glacier. The study found ambient melting was 100 times higher than models had estimated. This is the first time experts have been able to analyze plumes of meltwater (the water released when snow or ice melts) where glaciers meet the ocean, because the feat is far too dangerous for ships due to slabs of ice falling from the glacier.


TensorFlow.js brings machine learning to JavaScript

#artificialintelligence

Increasingly in AI, developers want to do more powerful things with browsers, such as speech recognition; image and object recognition; and pattern and anomaly detection. TensorFlow.js aims to put that power in the browser form factor without the need for additional cloud resources or specialized servers or chipsets. "This means that even casual app developers looking to add machine learning capabilities to their web-based apps or even mobile apps that leverage JavaScript- or Node.js-based server apps can use TensorFlow.js to add that capability," said Ronald Schmelzer, an analyst at Cognilytica, in Ellicott City, Md. TensorFlow.js runs machine learning models entirely in the browser, using JavaScript and a high-level layers API. As TensorFlow stands as the standard library for building machine learning models these days, TensorFlow.js enables JavaScript developers to reuse TensorFlow skills, extensions, and models, and will enable more standardization across the field as a whole, said Adam Smith, CEO of San Francisco-based Kite, which uses machine learning to help developers write code.


How Artificial Intelligence Could Prevent Natural Disasters

#artificialintelligence

On May 27, a deluge dumped more than 6 inches of rain in less than three hours on Ellicott City, Maryland, killing one person and transforming Main Street into what looked like Class V river rapids, with cars tossed about like rubber ducks. The National Weather Service put the probability of such a storm at once in 1,000 years. Yet, "it's the second time it's happened in the last three years," says Jeff Allenby, director of conservation technology for Chesapeake Conservancy, an environmental group. Floods are nothing new in Ellicott City, located where two tributaries join the Patapsco River. But Allenby says the floods are getting worse, as development covers what used to be the "natural sponge of a forest" with paved surfaces, rooftops, and lawns.


AI Weekly: How to regulate facial recognition to preserve freedom

#artificialintelligence

Today Microsoft president Brad Smith called for federal regulation of facial recognition software. "In a democratic republic, there is no substitute for decision making by our elected representatives regarding the issues that require the balancing of public safety with the essence of our democratic freedoms. Facial recognition will require the public and private sectors alike to step up -- and to act," Smith wrote in a blog post. Recent events explain why Smith is speaking out now. Last month, while many U.S. citizens were outraged about the idea of separating families who unlawfully entered the United States, Microsoft was criticized by the public and hundreds of its own employees for its contract with Immigration and Customs Enforcement (ICE).